5 research outputs found

    FPGA-based Accelerators for cryptography

    Cryptography involves mathematical theory and encryption methods. Cryptographic algorithms are designed around computational hardness assumptions, which makes them computationally intensive. Sometimes a pure software approach is not fast enough, while a pure hardware approach can be very complex. In this project, we present a middle ground between the software and hardware approaches using an FPGA. The intended outcome of the project is the design and development of two hardware-based accelerators for cryptography that can be dynamically loaded onto the FPGA. Multiple approaches are presented over the course of the project to design and test the accelerators.

    TensorDash: Exploiting Sparsity to Accelerate Deep Neural Network Training and Inference

    TensorDash is a hardware-level technique for enabling data-parallel MAC units to take advantage of sparsity in their input operand streams. When used to compose a hardware accelerator for deep learning, TensorDash can speed up the training process while also increasing energy efficiency. TensorDash combines a low-cost, sparse input operand interconnect, comprising an 8-input multiplexer per multiplier input, with an area-efficient hardware scheduler. While the interconnect allows only a very limited set of movements per operand, the scheduler can effectively extract sparsity when it is present in the activations, weights, or gradients of neural networks. Over a wide set of models covering various applications, TensorDash accelerates the training process by 1.95× while being 1.89× more energy-efficient (1.6× when on-chip and off-chip memory accesses are taken into account). While TensorDash works with any datatype, we demonstrate it with both single-precision floating-point and bfloat16 units.
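    The scheduling idea in this abstract can be mimicked in software. Below is a minimal Python sketch of an idealized sparse operand schedule: each lane's mux may promote any nonzero operand within an 8-entry window, lanes advance in lockstep, and a window therefore drains in as many cycles as its busiest lane has nonzeros. The lane count, stream length, sparsity level, and lockstep/window assumptions are illustrative; the paper's actual interconnect permits an even more restricted set of operand movements.

        import numpy as np

        LANES, STEPS = 4, 64   # assumed unit width and stream length
        WINDOW = 8             # 8-input mux per multiplier input (from the abstract)

        rng = np.random.default_rng(0)
        ops = rng.random((LANES, STEPS))
        ops[rng.random(ops.shape) < 0.6] = 0.0   # inject ~60% sparsity

        dense_cycles = STEPS                      # baseline: zeros still occupy cycles

        # Idealized sparse schedule: each WINDOW-wide chunk takes as many
        # cycles as the busiest lane has nonzero operands in that chunk.
        sparse_cycles = 0
        for start in range(0, STEPS, WINDOW):
            chunk = ops[:, start:start + WINDOW]
            sparse_cycles += int((chunk != 0).sum(axis=1).max())

        print(f"dense: {dense_cycles} cycles, sparse: {sparse_cycles} cycles, "
              f"speedup: {dense_cycles / max(sparse_cycles, 1):.2f}x")

    In this toy model the achievable speedup tracks the sparsity of the busiest lane per window, which is why the reported gains depend on how much sparsity is present in activations, weights, or gradients.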

    Amplifying On-Chip Memory Storage Capacity via Compression for Neural Network Workloads

    Deep Neural Networks (DNNs) have been proven to be state-of-the-art for many applications. DNNs are an excellent target for hardware acceleration for several reasons: they are essentially a large collection of independent multiply-accumulate operations, they require large amounts of data to be transferred, and they are deployed in an ever-increasing set of applications. Hence, hardware acceleration for DNNs has been a highly active area of research and development, where new designs and techniques are proposed to achieve faster and more energy-efficient DNN execution on hardware. Investigating such hardware acceleration techniques requires tools that allow us to experiment with these methods and to estimate their potential performance and energy benefits. Accordingly, this M.A.S. thesis targets two main contributions: 1) DNNsim, a simulator for DNN hardware accelerators used to evaluate new designs, and 2) Boveda, an on-chip memory compression technique designed for deep learning accelerator memory hierarchies.
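    The abstract does not detail Boveda's encoding, so as a generic illustration of on-chip memory compression for DNN data, here is a minimal Python sketch of zero-value compression using a presence bitmask plus packed nonzero values, one common way to amplify effective storage capacity for sparse tensors. The tile contents and int8 datatype are assumptions for the example, not taken from the thesis.

        import numpy as np

        def compress(tile):
            # Bitmask + packed nonzeros: 1 bit per element marks presence,
            # and only the nonzero values are stored.
            mask = tile.reshape(-1) != 0
            return np.packbits(mask), tile.reshape(-1)[mask]

        def decompress(packed_mask, values, shape):
            # Rebuild the dense tile from the bitmask and the packed values.
            mask = np.unpackbits(packed_mask, count=int(np.prod(shape))).astype(bool)
            out = np.zeros(int(np.prod(shape)), dtype=values.dtype)
            out[mask] = values
            return out.reshape(shape)

        tile = np.array([[0, 3, 0, 0], [7, 0, 0, 1]], dtype=np.int8)  # toy tile
        pm, vals = compress(tile)
        assert (decompress(pm, vals, tile.shape) == tile).all()

        raw_bits = tile.size * 8
        comp_bits = pm.size * 8 + vals.size * 8
        print(f"{raw_bits} bits raw -> {comp_bits} bits compressed")

    For this 62.5%-zero tile the scheme halves the footprint (64 bits down to 32); the bitmask overhead means such schemes only pay off when tensors are sufficiently sparse.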